It has been observed that audio-visual embeddings can be extracted from these two modalities to obtain robust person verification. However, the aggregator that produces a single utterance-level representation from the individual frames appears to be under-explored. In this paper, we propose an audio-visual network that considers the aggregator from a fusion perspective. We first introduce improved attentive statistics pooling to face verification. We then observe a strong correlation between the modalities during pooling and therefore propose joint attentive pooling, which incorporates cycle consistency to learn implicit inter-frame weights. Finally, the modalities are fused with a gated attention mechanism. All proposed models are trained on the VoxCeleb2 dev dataset, and the best system obtains 0.18%, 0.27%, and 0.49% EER on the three official trial lists of VoxCeleb1, which are, to our knowledge, the best published results for person verification. As an analysis, visualization maps are generated to explain how the system interacts between the modalities.
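To make the two aggregation ideas above concrete, the sketch below shows a generic attentive statistics pooling layer over frame-level features and a gated fusion of the resulting audio and visual utterance embeddings. It is a minimal PyTorch illustration under assumed dimensions and module names; it does not reproduce the paper's joint attentive pooling with cycle consistency.

```python
import torch
import torch.nn as nn

class AttentiveStatsPooling(nn.Module):
    """Weighted mean + std over frames, with weights from a small attention net."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, x):                        # x: (batch, frames, dim)
        w = torch.softmax(self.attn(x), dim=1)   # per-frame weights, (batch, frames, 1)
        mu = (w * x).sum(dim=1)                  # weighted mean, (batch, dim)
        var = (w * (x - mu.unsqueeze(1)) ** 2).sum(dim=1)
        return torch.cat([mu, (var + 1e-8).sqrt()], dim=-1)  # (batch, 2*dim)

class GatedFusion(nn.Module):
    """Fuse audio/visual utterance embeddings with a learned sigmoid gate."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, a, v):                     # a, v: (batch, dim)
        g = self.gate(torch.cat([a, v], dim=-1))
        return g * a + (1 - g) * v
```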
Decentralized bilevel optimization has received increasing attention recently due to its foundational role in many emerging multi-agent learning paradigms (e.g., multi-agent meta-learning and multi-agent reinforcement learning) over peer-to-peer edge networks. However, to work with the limited computation and communication capabilities of edge networks, a major challenge in developing decentralized bilevel optimization techniques is to lower sample and communication complexities. This motivates us to develop a new decentralized bilevel optimization algorithm called DIAMOND (decentralized single-timescale stochastic approximation with momentum and gradient-tracking). The contributions of this paper are as follows: i) our DIAMOND algorithm adopts a single-loop structure rather than following the natural double-loop structure of bilevel optimization, which offers low computation and implementation complexity; ii) compared to existing approaches, the DIAMOND algorithm does not require any full gradient evaluations, which further reduces both sample and computational complexities; iii) through a careful integration of momentum information and gradient tracking techniques, we show that the DIAMOND algorithm enjoys $\mathcal{O}(\epsilon^{-3/2})$ sample and communication complexities for achieving an $\epsilon$-stationary solution, both of which are independent of the dataset sizes and significantly outperform existing works. Extensive experiments also verify our theoretical findings.
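As a rough picture of the single-loop idea, the sketch below runs plain decentralized stochastic gradient tracking with a momentum (moving-average) gradient estimator over a doubly stochastic mixing matrix `W`. It is a single-level toy under assumed step sizes and oracles; the bilevel structure and the specific DIAMOND updates are not reproduced here.

```python
import numpy as np

def decentralized_gt_momentum(grad, x0, W, alpha=0.01, beta=0.9, iters=100):
    """Generic single-loop decentralized stochastic gradient tracking with momentum.

    grad(i, x): stochastic gradient of agent i's local objective at x
    x0:         (n_agents, dim) initial iterates
    W:          (n_agents, n_agents) doubly stochastic mixing matrix
    """
    x = x0.copy()
    n = len(x)
    m = np.stack([grad(i, x[i]) for i in range(n)])       # momentum (moving-average) estimates
    y = m.copy()                                          # gradient trackers
    for _ in range(iters):
        x_new = W @ x - alpha * y                         # consensus step + descent (one loop)
        m_new = np.stack([beta * m[i] + (1 - beta) * grad(i, x_new[i]) for i in range(n)])
        y = W @ y + m_new - m                             # track the average gradient direction
        x, m = x_new, m_new
    return x.mean(axis=0)
```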
A key assumption in most existing works on FL algorithms' convergence analysis is that the noise in stochastic first-order information has a finite variance. Although this assumption covers all light-tailed (i.e., sub-exponential) and some heavy-tailed noise distributions (e.g., log-normal, Weibull, and some Pareto distributions), it fails for many fat-tailed noise distributions (i.e., "heavier-tailed" with potentially infinite variance) that have been empirically observed in the FL literature. To date, it remains unclear whether one can design convergent algorithms for FL systems that experience fat-tailed noise. This motivates us to fill this gap by proposing an algorithmic framework called FAT-Clipping (federated averaging with two-sided learning rates and clipping), which contains two variants: FAT-Clipping per-round (FAT-Clipping-PR) and FAT-Clipping per-iteration (FAT-Clipping-PI). Specifically, for the largest $\alpha \in (1,2]$ such that the fat-tailed noise in FL still has a bounded $\alpha$-moment, we show that both variants achieve $\mathcal{O}((mT)^{\frac{2-\alpha}{\alpha}})$ and $\mathcal{O}((mT)^{\frac{1-\alpha}{3\alpha-2}})$ convergence rates in the strongly-convex and general non-convex settings, respectively, where $m$ and $T$ are the numbers of clients and communication rounds. Moreover, at the expense of more clipping operations than FAT-Clipping-PR, FAT-Clipping-PI further enjoys a linear speedup effect with respect to the number of local updates at each client and is lower-bound-matching (i.e., order-optimal). Collectively, our results advance the understanding of designing efficient algorithms for FL systems that exhibit fat-tailed first-order oracle information.
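For intuition, here is a hedged sketch of the per-round variant of the clipping idea: each client's whole-round update is passed through the operator clip(v, λ) = min(1, λ/‖v‖)·v before the server averages it, with separate client and server learning rates. The objectives, threshold, and learning rates below are illustrative assumptions, not the exact FAT-Clipping-PR procedure or its analysis constants.

```python
import numpy as np

def clip(v, lam):
    """Scale v down to norm lam if it exceeds lam; leave it unchanged otherwise."""
    n = np.linalg.norm(v)
    return v if n <= lam else (lam / n) * v

def fedavg_with_round_clipping(client_grads, x0, eta_l=0.1, eta_g=1.0, lam=1.0,
                               rounds=50, local_steps=5):
    """client_grads[i](x): stochastic gradient of client i's loss at x (possibly fat-tailed)."""
    x = x0.copy()
    m = len(client_grads)
    for _ in range(rounds):
        deltas = []
        for i in range(m):
            xi = x.copy()
            for _ in range(local_steps):              # local SGD with the client learning rate
                xi = xi - eta_l * client_grads[i](xi)
            deltas.append(clip(xi - x, lam))           # clip the whole round's update (per-round)
        x = x + eta_g * np.mean(deltas, axis=0)        # server step with its own learning rate
    return x
```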
Activation functions are essential for introducing nonlinearity into neural networks. Many empirical experiments have validated various activation functions, but theoretical research on activation functions remains insufficient. In this work, we study the influence of activation functions on the variance of gradients and propose an approach to normalize activation functions so that the variance of the gradients stays the same across all layers, allowing neural networks to achieve better convergence. First, we complement the previous analysis of gradient variance, in which the influence of the activation function is considered only in an idealized initial state that is hardly preserved during training, and derive properties that a good activation function should satisfy as far as possible. Second, we provide a method to normalize activation functions and demonstrate its effectiveness on common activation functions. By observing the experiments, we find that the convergence speed is roughly correlated with the properties derived in the first part. We conduct experiments on normalized versions of common activation functions. The results show that our method consistently outperforms the un-normalized counterparts; for example, in terms of top-1 accuracy, normalized Swish outperforms vanilla Swish with ResNet50 on CIFAR-100. Our method improves performance simply by replacing activation functions with their normalized counterparts in fully connected networks and residual networks.
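One simple way to realize the kind of normalization described above (offered as an assumption, not the paper's exact method) is to estimate an activation's output mean and standard deviation under a standard normal input and rescale it to zero mean and unit variance, which is the sort of property that keeps per-layer statistics from drifting:

```python
import numpy as np

def swish(x):
    return x / (1.0 + np.exp(-x))

def normalize_activation(f, n_samples=1_000_000, seed=0):
    """Return g(x) = (f(x) - mu) / sigma with mu, sigma estimated under x ~ N(0, 1)."""
    z = np.random.default_rng(seed).standard_normal(n_samples)
    fz = f(z)
    mu, sigma = fz.mean(), fz.std()
    return lambda x: (f(x) - mu) / sigma

# quick check: the normalized activation is roughly zero-mean, unit-variance on N(0, 1) inputs
norm_swish = normalize_activation(swish)
z = np.random.default_rng(1).standard_normal(100_000)
print(norm_swish(z).mean(), norm_swish(z).std())
```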
For centuries, human civilization has developed metal-forming techniques to make tools and objects; yet customized metal forming remains expensive and complex. Laser-forming origami (Lasergami) is a metal-forming process in which a laser beam cuts and folds a flat metal sheet into a three-dimensional (3D) shape. However, designing structures that can be folded by a laser has long been a trial-and-error practice that demands considerable mental effort and hinders the creation of practical structures. This work demonstrates, for the first time, that Lasergami can form free-form metal structures previously thought impossible to laser-form. This technological breakthrough is enabled by a new computational origami method that mimics the blooming of a flower and optimizes the laser folding instructions. Combined with new ideas for addressing the laser's line of sight and minimizing fabrication energy, we report a low-cost fabrication framework that can be readily adopted by hobbyists and professionals alike.
Orthogonal frequency division multiplexing (OFDM) has been widely applied in current communication systems. Artificial intelligence (AI)-aided OFDM receivers are currently being brought to the forefront to replace and improve traditional OFDM receivers. In this study, we first compare two AI-aided OFDM receivers, a data-driven fully connected deep neural network and the model-driven ComNet, through extensive simulations and real-time video transmission using a 5G rapid prototyping system for over-the-air (OTA) testing. We find a performance gap between the simulations and the OTA tests caused by the discrepancy between the channel model used for offline training and the real environment. We develop a novel online-training system, called the SwitchNet receiver, to address this problem. The receiver has a flexible and extensible architecture that can adapt to real channels by training only a few parameters online. In the OTA tests, the AI-aided OFDM receivers, especially the SwitchNet receiver, are robust to real environments and promising for future communication systems. We discuss the potential challenges and future research directions raised by this preliminary study.
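A hedged sketch of the online-adaptation idea: a toy fully connected receiver exposes a handful of "switch" weights that blend several pretrained branches, and only those weights are fine-tuned on pilot blocks from the real channel. The architecture, parameter names, and training loop below are assumptions for illustration and do not reproduce the actual SwitchNet receiver.

```python
import torch
import torch.nn as nn

class FCReceiver(nn.Module):
    """Toy fully connected OFDM receiver: received symbols + pilots -> bit logits."""
    def __init__(self, n_in, n_bits, hidden=256, n_branches=2):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Linear(n_in, hidden), nn.ReLU(), nn.Linear(hidden, n_bits))
            for _ in range(n_branches)
        )
        # A few "switch" weights that combine branches; only these are trained online.
        self.switch = nn.Parameter(torch.zeros(n_branches))

    def forward(self, x):
        w = torch.softmax(self.switch, dim=0)
        return sum(w[i] * b(x) for i, b in enumerate(self.branches))

def adapt_online(model, x_pilot, bits_pilot, steps=20, lr=1e-2):
    """Fine-tune only the switch weights on a few pilot blocks from the real channel.

    bits_pilot: float tensor of 0/1 labels, same shape as the model output.
    """
    for p in model.parameters():
        p.requires_grad_(False)
    model.switch.requires_grad_(True)
    opt = torch.optim.Adam([model.switch], lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(x_pilot), bits_pilot)
        loss.backward()
        opt.step()
```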
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, yet its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
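As a rough illustration of this decoder-style design, the sketch below lets a set of object queries cross-attend to the concatenation of image and point-cloud tokens and regress boxes directly. The dimensions, heads, and the omission of CMT's 3D coordinate positional encodings are all simplifying assumptions; this is not the released implementation.

```python
import torch
import torch.nn as nn

class CrossModalDecoder(nn.Module):
    """Object queries attend jointly to image and LiDAR tokens, then predict 3D boxes."""
    def __init__(self, dim=256, n_queries=300, n_heads=8, n_layers=3, box_dim=10, n_cls=10):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_queries, dim))
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.box_head = nn.Linear(dim, box_dim)   # e.g. center, size, yaw, velocity
        self.cls_head = nn.Linear(dim, n_cls)     # e.g. nuScenes classes

    def forward(self, img_tokens, pts_tokens):    # (B, N_img, dim), (B, N_pts, dim)
        memory = torch.cat([img_tokens, pts_tokens], dim=1)    # joint token sequence
        q = self.queries.unsqueeze(0).expand(img_tokens.size(0), -1, -1)
        h = self.decoder(q, memory)               # queries cross-attend to both modalities
        return self.box_head(h), self.cls_head(h)
```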
Given the increasingly intricate forms of partial differential equations (PDEs) in physics and related fields, computationally solving PDEs without analytic solutions inevitably suffers from the trade-off between accuracy and efficiency. Recent advances in neural operators, a class of mesh-independent neural-network-based PDE solvers, suggest that this challenge may be overcome. In this emerging direction, the Koopman neural operator (KNO) is a representative demonstration and outperforms other state-of-the-art alternatives in terms of accuracy and efficiency. Here we present KoopmanLab, a self-contained and user-friendly PyTorch module of the Koopman neural operator family for solving partial differential equations. Beyond the original version of KNO, we develop multiple new variants of KNO based on different neural network architectures to improve the general applicability of our module. These variants are validated by mesh-independent and long-term prediction experiments implemented on representative PDEs (e.g., the Navier-Stokes equation and the Bateman-Burgers equation) and ERA5 (i.e., one of the largest high-resolution data sets of global-scale climate fields). These demonstrations suggest the potential of KoopmanLab to be considered in diverse applications of partial differential equations.
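The core Koopman-operator idea can be pictured as encode, evolve (approximately) linearly in a truncated Fourier latent space, decode. The toy step below is a hedged PyTorch sketch under assumed shapes and module names; it is not KoopmanLab's API.

```python
import torch
import torch.nn as nn

class ToyKoopmanStep(nn.Module):
    """Encode -> linear evolution of truncated Fourier modes -> decode, for a 1D field."""
    def __init__(self, width=32, modes=16):      # assumes grid size >= 2 * modes
        super().__init__()
        self.encoder = nn.Linear(1, width)
        self.decoder = nn.Linear(width, 1)
        self.modes = modes
        # Learned complex multipliers on the retained Fourier modes (the "Koopman" map).
        self.koopman = nn.Parameter(torch.randn(width, modes, dtype=torch.cfloat) * 0.02)

    def forward(self, u):                        # u: (batch, grid, 1)
        z = self.encoder(u)                      # lift to latent width, (batch, grid, width)
        z_hat = torch.fft.rfft(z, dim=1)         # (batch, grid//2 + 1, width), complex
        low = z_hat[:, :self.modes, :] * self.koopman.T   # evolve low-frequency modes
        z_hat = torch.cat([low, z_hat[:, self.modes:, :]], dim=1)
        z = torch.fft.irfft(z_hat, n=u.size(1), dim=1)
        return self.decoder(z)                   # predicted field one step ahead
```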
Rankings are widely collected in various real-life scenarios, leading to the leakage of personal information such as users' preferences on videos or news. To protect rankings, existing works mainly develop privacy protection for a single ranking within a set of rankings, or for pairwise comparisons of a ranking, under $\epsilon$-differential privacy. This paper proposes a novel notion called $\epsilon$-ranking differential privacy for protecting ranks. We establish the connection between the Mallows model (Mallows, 1957) and the proposed $\epsilon$-ranking differential privacy. This allows us to develop a multistage ranking algorithm to generate synthetic rankings while satisfying the developed $\epsilon$-ranking differential privacy. Theoretical results regarding the utility of synthetic rankings in downstream tasks, including the inference attack and the personalized ranking tasks, are established. For the inference attack, we quantify how $\epsilon$ affects the estimation of the true ranking based on synthetic rankings. For the personalized ranking task, we consider varying privacy preferences among users and quantify how their privacy preferences affect the consistency in estimating the optimal ranking function. Extensive numerical experiments are carried out to verify the theoretical results and demonstrate the effectiveness of the proposed synthetic ranking algorithm.
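To make the Mallows connection concrete, the snippet below uses the standard repeated-insertion method to sample a synthetic ranking around a central ranking with dispersion `phi`. Mapping `phi` to the paper's $\epsilon$-ranking differential privacy guarantee is not shown, and the function name and parametrization are illustrative assumptions rather than the paper's multistage algorithm.

```python
import numpy as np

def sample_mallows(center, phi, rng=None):
    """Repeated-insertion sampling from a Mallows model.

    center: list of items giving the central (true) ranking
    phi:    dispersion in (0, 1]; phi = 1 is uniform, phi -> 0 concentrates on `center`
    """
    rng = rng or np.random.default_rng()
    ranking = []
    for i, item in enumerate(center):        # insert the i-th item of the central ranking
        # position j in {0, ..., i}, with P(j) proportional to phi**(i - j)
        probs = phi ** (i - np.arange(i + 1))
        probs = probs / probs.sum()
        j = rng.choice(i + 1, p=probs)
        ranking.insert(j, item)
    return ranking

# e.g. a noisier synthetic copy of a user's top-5 preference list
print(sample_mallows(["a", "b", "c", "d", "e"], phi=0.5))
```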
Due to their ability to offer more comprehensive information than data from a single view, multi-view (multi-source, multi-modal, multi-perspective, etc.) data are being used more frequently in remote sensing tasks. However, as the number of views grows, the issue of data quality becomes more apparent, limiting the potential benefits of multi-view data. Although recent deep neural network (DNN) based models can learn data weights adaptively, the lack of research on explicitly quantifying the data quality of each view during fusion leaves these models hard to interpret and makes them perform unsatisfactorily and inflexibly in downstream remote sensing tasks. To fill this gap, in this paper, evidential deep learning is introduced to the task of aerial-ground dual-view remote sensing scene classification to model the credibility of each view. Specifically, the theory of evidence is used to calculate an uncertainty value which describes the decision-making risk of each view. Based on this uncertainty, a novel decision-level fusion strategy is proposed to ensure that the view with lower risk obtains more weight, making the classification more credible. On two well-known, publicly available datasets of aerial-ground dual-view remote sensing images, the proposed approach achieves state-of-the-art results, demonstrating its effectiveness. The code and datasets of this article are available at the following address: https://github.com/gaopiaoliang/Evidential.
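A compact sketch of the uncertainty-weighted, decision-level fusion described above, following the common Dirichlet (subjective-logic) formulation of evidential deep learning: per-class evidence yields Dirichlet parameters α = e + 1, each view's uncertainty is u = K/Σα, and the view with lower u receives more weight. The exact fusion rule in the paper may differ, so treat the weighting below as an assumption.

```python
import numpy as np

def evidential_fuse(logits_aerial, logits_ground):
    """Fuse two views' class logits, down-weighting the view with higher evidential uncertainty."""
    def dirichlet(logits):
        evidence = np.log1p(np.exp(logits))           # softplus: non-negative evidence per class
        alpha = evidence + 1.0                        # Dirichlet concentration parameters
        k = alpha.shape[-1]
        u = k / alpha.sum(-1, keepdims=True)          # subjective-logic uncertainty mass
        prob = alpha / alpha.sum(-1, keepdims=True)   # expected class probabilities
        return prob, u

    p_a, u_a = dirichlet(logits_aerial)
    p_g, u_g = dirichlet(logits_ground)
    w_a, w_g = (1 - u_a), (1 - u_g)                   # lower risk (uncertainty) -> larger weight
    fused = (w_a * p_a + w_g * p_g) / (w_a + w_g)
    return fused, u_a, u_g

fused, u_a, u_g = evidential_fuse(np.array([2.0, 0.1, -1.0]), np.array([0.2, 0.3, 0.1]))
print(fused, u_a, u_g)   # the more confident aerial view dominates the fused prediction
```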